
Appendix for Self-Weighted Contrastive Learning among Multiple Views for Mitigating Representation Degeneration

Neural Information Processing Systems

For all used datasets, the learning rate is fixed to 0.0003, and the noise of the denoising autoencoder is random Gaussian noise. We do not report results on YoutubeVideo, as this large-scale dataset is beyond the usable range of SVM.


Self-Weighted Contrastive Learning among Multiple Views for Mitigating Representation Degeneration

Neural Information Processing Systems

Contrastive learning (CL) explicitly enlarges the feature representation similarity between semantically relevant samples, and it is adept at capturing high-level semantics while discarding irrelevant information.
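The idea of pulling together semantically relevant views can be illustrated with a minimal NumPy sketch. The plain InfoNCE form, the function names, and the per-view weighting scheme below are illustrative assumptions, not the paper's exact self-weighted loss:

```python
import numpy as np

def info_nce(z_a, z_b, temperature=0.5):
    """InfoNCE loss between two views' features.

    Row i of z_a and row i of z_b are a positive pair; all other
    rows in the batch serve as negatives.
    """
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature            # (N, N) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))            # positives on the diagonal

def weighted_multiview_loss(views, weights, temperature=0.5):
    """Weighted sum of pairwise InfoNCE losses over all view pairs,
    so unreliable views can be down-weighted."""
    total, norm = 0.0, 0.0
    for i in range(len(views)):
        for j in range(len(views)):
            if i == j:
                continue
            w = weights[i] * weights[j]
            total += w * info_nce(views[i], views[j], temperature)
            norm += w
    return total / norm

# Three synthetic views of the same 8 samples, perturbed by small noise.
rng = np.random.default_rng(0)
base = rng.normal(size=(8, 16))
views = [base + 0.1 * rng.normal(size=base.shape) for _ in range(3)]
loss = weighted_multiview_loss(views, weights=[1.0, 0.5, 0.5])
```

Aligned views yield a small loss; views drawn independently of one another score near the log-batch-size upper bound, which is what drives the representations of relevant samples together.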


VCR-GauS: View Consistent Depth-Normal Regularizer for Gaussian Surface Reconstruction

Neural Information Processing Systems

Although 3D Gaussian Splatting has been widely studied because of its realistic and efficient novel-view synthesis, it is still challenging to extract a high-quality surface from the point-based representation. Previous works improve the surface by incorporating geometric priors from the off-the-shelf normal estimator. However, there are two main limitations: 1) Supervising normal rendered from 3D Gaussians updates only the rotation parameter while neglecting other geometric parameters; 2) The inconsistency of predicted normal maps across multiple views may lead to severe reconstruction artifacts. In this paper, we propose a Depth-Normal regularizer that directly couples normal with other geometric parameters, leading to full updates of the geometric parameters from normal regularization. We further propose a confidence term to mitigate inconsistencies of normal predictions across multiple views. Moreover, we also introduce a densification and splitting strategy to regularize the size and distribution of 3D Gaussians for more accurate surface modeling. Compared with Gaussian-based baselines, experiments show that our approach obtains better reconstruction quality and maintains competitive appearance quality at faster training speed and 100+ FPS rendering. Our code will be made open-source upon paper acceptance.
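A confidence term that down-weights inconsistent normal predictions can be sketched as follows. This is a hypothetical simplification for illustration: the function name, the cosine-distance form, and the confidence map are assumptions, not the paper's exact regularizer:

```python
import numpy as np

def confidence_weighted_normal_loss(rendered_n, predicted_n, confidence):
    """Per-pixel cosine-distance loss between rendered and predicted
    normals, down-weighted by a confidence map that is low where the
    off-the-shelf normal predictions disagree across views.

    rendered_n, predicted_n: (H, W, 3) unit normal maps
    confidence:              (H, W) values in [0, 1]
    """
    cos = np.sum(rendered_n * predicted_n, axis=-1)  # per-pixel cosine
    return float(np.mean(confidence * (1.0 - cos)))

# Identical normal maps incur zero loss regardless of confidence.
n = np.zeros((4, 4, 3))
n[..., 2] = 1.0                                   # all normals point along +z
conf = np.full((4, 4), 0.8)
loss_same = confidence_weighted_normal_loss(n, n, conf)
loss_flip = confidence_weighted_normal_loss(n, -n, conf)  # opposite normals
```

Pixels where multi-view predictions conflict receive low confidence and therefore contribute little gradient, which is how such a term mitigates reconstruction artifacts from inconsistent priors.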


Learning Representations by Maximizing Mutual Information Across Views

Neural Information Processing Systems

We propose an approach to self-supervised representation learning based on maximizing mutual information between features extracted from multiple views of a shared context. For example, one could produce multiple views of a local spatio-temporal context by observing it from different locations (e.g., camera positions within a scene), and via different modalities (e.g., tactile, auditory, or visual). Or, an ImageNet image could provide a context from which one produces multiple views by repeatedly applying data augmentation. Maximizing mutual information between features extracted from these views requires capturing information about high-level factors whose influence spans multiple views - e.g., presence of certain objects or occurrence of certain events. Following our proposed approach, we develop a model which learns image representations that significantly outperform prior methods on the tasks we consider. Most notably, using self-supervised learning, our model learns representations which achieve 68.1% accuracy on ImageNet using standard linear evaluation. This beats prior results by over 12% and concurrent results by 7%. When we extend our model to use mixture-based representations, segmentation behaviour emerges as a natural side-effect.


CAESAR: An Embodied Simulator for Generating Multimodal Referring Expression Datasets

Neural Information Processing Systems

Humans naturally use verbal utterances and nonverbal gestures to refer to various objects (known as $\textit{referring expressions}$) in different interactional scenarios. As collecting real human interaction datasets is costly and laborious, synthetic datasets are often used to train models to unambiguously detect relationships among objects. However, existing synthetic data generation tools that provide referring expressions generally neglect nonverbal gestures. Additionally, while a few small-scale datasets contain multimodal cues (verbal and nonverbal), these datasets only capture the nonverbal gestures from an exo-centric (observer) perspective. As models can use complementary information from multimodal cues to recognize referring expressions, generating multimodal data from multiple views can help to develop robust models.


Learning Object-Centric Representations of Multi-Object Scenes from Multiple Views

Neural Information Processing Systems

Learning object-centric representations of multi-object scenes is a promising approach towards machine intelligence, facilitating high-level reasoning and control from visual sensory data. However, current approaches for \textit{unsupervised object-centric scene representation} are incapable of aggregating information from multiple observations of a scene. As a result, these ``single-view'' methods form their representations of a 3D scene based only on a single 2D observation (view). Naturally, this leads to several inaccuracies, with these methods falling victim to single-view spatial ambiguities. To address this, we propose \textit{The Multi-View and Multi-Object Network (MulMON)}---a method for learning accurate, object-centric representations of multi-object scenes by leveraging multiple views.


Advanced Unsupervised Learning: A Comprehensive Overview of Multi-View Clustering Techniques

Moujahid, Abdelmalik, Dornaika, Fadi

arXiv.org Artificial Intelligence

Machine learning techniques face numerous challenges to achieve optimal performance. These include computational constraints, the limitations of single-view learning algorithms and the complexity of processing large datasets from different domains, sources or views. In this context, multi-view clustering (MVC), a class of unsupervised multi-view learning, emerges as a powerful approach to overcome these challenges. MVC compensates for the shortcomings of single-view methods and provides a richer data representation and effective solutions for a variety of unsupervised learning tasks. In contrast to traditional single-view approaches, the semantically rich nature of multi-view data increases its practical utility despite its inherent complexity. This survey makes a threefold contribution: (1) a systematic categorization of multi-view clustering methods into well-defined groups, including co-training, co-regularization, subspace, deep learning, kernel-based, anchor-based, and graph-based strategies; (2) an in-depth analysis of their respective strengths, weaknesses, and practical challenges, such as scalability and incomplete data; and (3) a forward-looking discussion of emerging trends, interdisciplinary applications, and future directions in MVC research. This study represents an extensive workload, encompassing the review of over 140 foundational and recent publications, the development of comparative insights on integration strategies such as early fusion, late fusion, and joint learning, and the structured investigation of practical use cases in the areas of healthcare, multimedia, and social network analysis. By integrating these efforts, this work aims to fill existing gaps in MVC research and provide actionable insights for the advancement of the field.


Multi-view Anomaly Detection via Robust Probabilistic Latent Variable Models

Tomoharu Iwata, Makoto Yamada

Neural Information Processing Systems

We propose probabilistic latent variable models for multi-view anomaly detection, which is the task of finding instances that have inconsistent views given multi-view data. With the proposed model, all views of a non-anomalous instance are assumed to be generated from a single latent vector. On the other hand, an anomalous instance is assumed to have multiple latent vectors, and its different views are generated from different latent vectors. By inferring the number of latent vectors used for each instance with Dirichlet process priors, we obtain multi-view anomaly scores. The proposed model can be seen as a robust extension of probabilistic canonical correlation analysis for noisy multi-view data. We present Bayesian inference procedures for the proposed model based on a stochastic EM algorithm. The effectiveness of the proposed model is demonstrated in terms of performance when detecting multi-view anomalies.
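The core intuition, that an anomalous instance is one whose views cannot be explained by a single shared latent vector, can be sketched with a least-squares stand-in. The linear model and function below are illustrative assumptions; the paper's actual method infers per-instance latent vectors with Dirichlet process priors and stochastic EM:

```python
import numpy as np

def multiview_anomaly_score(views, W):
    """Score one instance by how badly a single shared latent vector z
    explains all of its views under linear loadings x_v ~ W_v z
    (a simplified stand-in for the Dirichlet-process model).

    views: list of 1-D observation vectors, one per view
    W:     list of (d_v, k) loading matrices
    """
    # Stack all views into one least-squares problem for the shared z.
    A = np.vstack(W)
    b = np.concatenate(views)
    z, *_ = np.linalg.lstsq(A, b, rcond=None)
    # Anomaly score: total residual across all views.
    return float(np.sum((A @ z - b) ** 2))

rng = np.random.default_rng(1)
W = [rng.normal(size=(5, 2)) for _ in range(2)]
z = rng.normal(size=2)
consistent = [Wv @ z for Wv in W]                 # both views share one z
z2 = rng.normal(size=2)
inconsistent = [W[0] @ z, W[1] @ z2]              # views from different latents
```

A consistent instance is fit almost exactly by one latent vector and scores near zero, while an instance whose views come from different latent vectors leaves a large residual, mirroring the multi-view anomaly score described above.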